Synopsis
Important: Red Hat Ceph Storage 4.1 security, bug fix, and enhancement update
Type/Severity
Security Advisory: Important
Topic
Red Hat Ceph Storage 4.1 is now available.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
Description
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Security Fix(es):
- ceph-ansible: hard coded credential in ceph-ansible playbook (CVE-2020-1716)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgements, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es) and Enhancement(s):
For detailed information on changes in this release, see the Red Hat Ceph Storage 4.1 Release Notes available at:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4.1/html/release_notes/index
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
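As a rough illustration of the RHEL 7/8 split in the affected products below, the sketch here picks the stock package manager for each supported RHEL major release. This helper (`pick_pkgmgr`) is hypothetical and not part of the advisory; the linked article remains the authoritative update procedure.

```shell
# Hypothetical helper, not from the advisory: map a RHEL major release to
# its stock package manager. RHEL 7 hosts ship yum; RHEL 8 hosts ship dnf.
pick_pkgmgr() {
    case "$1" in
        7) echo "yum" ;;
        8) echo "dnf" ;;
        *) echo "unsupported RHEL release: $1" >&2; return 1 ;;
    esac
}

# Example: on a RHEL 8 Ceph node you would run
#   sudo "$(pick_pkgmgr 8)" update
# and then restart Ceph daemons per the documented rolling procedure.
```

In practice, follow the article above for the full procedure, including the order in which Ceph daemons should be updated and restarted.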
Affected Products
- Red Hat Ceph Storage 4 for RHEL 8 x86_64
- Red Hat Ceph Storage 4 for RHEL 7 x86_64
- Red Hat Ceph Storage MON 4 for RHEL 8 x86_64
- Red Hat Ceph Storage MON 4 for RHEL 7 x86_64
- Red Hat Ceph Storage OSD 4 for RHEL 8 x86_64
- Red Hat Ceph Storage OSD 4 for RHEL 7 x86_64
- Red Hat Ceph Storage for Power 4 for RHEL 8 ppc64le
- Red Hat Ceph Storage for Power 4 for RHEL 7 ppc64le
- Red Hat Ceph Storage MON for Power 4 for RHEL 8 ppc64le
- Red Hat Ceph Storage MON for Power 4 for RHEL 7 ppc64le
- Red Hat Ceph Storage OSD for Power 4 for RHEL 8 ppc64le
- Red Hat Ceph Storage OSD for Power 4 for RHEL 7 ppc64le
Fixes
- BZ - 1274084 - [RFE] Support for AWS Secure Token Service (STS) with RGW
- BZ - 1553202 - [RFE] Support user creation on secondary zone in multisite environment
- BZ - 1581421 - [RFE] If the nodeep-scrub/noscrub flags are set on pools rather than cluster-wide, list the pool names in the ceph status output
- BZ - 1625951 - [GSS] Recursive move from a directory with double underscore fails
- BZ - 1639817 - RFE: S3 v2 RESTBucketGet
- BZ - 1656512 - [RFE] Single Sign-On (SAML 2.0)
- BZ - 1658491 - [iscsi] add mixed iscsi (ipv4+ipv6) gateways on an ipv4 ceph cluster
- BZ - 1665683 - RGW: presigned URL for PUT with metadata fails with: SignatureDoesNotMatch
- BZ - 1678701 - rgw: org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus#testComplexDirActions
- BZ - 1679924 - Add Bluestore compression stats in dashboard
- BZ - 1687971 - [RFE] Bucket Check Commands Should Only Display Error/Orphaned Objects
- BZ - 1716815 - [RFE] Supportability of VMware ESX 6.7 on using Ceph iSCSI gateway
- BZ - 1716972 - bucket listing may repeat some unicode names
- BZ - 1719446 - facing rgw error - "/builddir/build/BUILD/ceph-12.2.8/src/rgw/rgw_sync.cc: In function 'virtual int PurgePeriodLogsCR::operate()' thread 7efe125d1700 .. .../rgw_sync.cc: 2387: FAILED assert(cursor) "
- BZ - 1724428 - The "host" signature in "ceph osd status" remains unchanged on moving an OSD disk from failed node to a new node (workaround: mgr restart)
- BZ - 1731148 - multisite pg_num on site2 pools should use site1/source values
- BZ - 1731554 - multisite: radosgw-admin bucket sync status incorrectly reports "caught up" during full sync
- BZ - 1734583 - [RFE] Improve upmap change reporting in logs
- BZ - 1738334 - [RFE] RGW Garbage collection - offloading omap to RADOS data objects
- BZ - 1741677 - Utilizing upmap for manual data rebalance to selected OSDs
- BZ - 1743388 - Multisite sync should not get stuck if the bucket names contain : and $
- BZ - 1744276 - sync status reports complete while bucket status does not
- BZ - 1746491 - rgw: avoid calling check_bucket_shards() in the write path (perf, dynamic resharding)
- BZ - 1747206 - [ RFE ] ceph-ansible needs expanded options for multi-site deployments
- BZ - 1747516 - if the user doesn't exist, bucket list should give an error/info message (saying the user doesn't exist) rather than showing an empty list
- BZ - 1759700 - MDS should have configurable limit on number of snapshots per directory
- BZ - 1759716 - standby-replay MDS assertion when removing inode
- BZ - 1759725 - [RFE] modify session timeout of individual clients to increase gracetime before blacklisting
- BZ - 1759727 - client requests on snapshotted inode may hang
- BZ - 1760126 - [RFE] Allow mount.ceph to consume keyrings and ceph.conf
- BZ - 1760129 - MDS may hit infinite loop in its Lock management module
- BZ - 1760219 - mgr/volumes: retry spawning purge threads on failure
- BZ - 1761474 - HEALTH_OK is reported with no managers (or OSDs) in the cluster
- BZ - 1761743 - mgr/volumes: allow resizing of FS subvolumes
- BZ - 1762170 - [ceph-dashboard] : Dashboard CLI: sso status command errors with "Error EPERM: Required library not found: `python3-saml`"
- BZ - 1762197 - [ceph-dashboard] Pools - Overall Performance : grafana errors out with TypeError: l.c[t.type] is undefined true
- BZ - 1762852 - When listing of bucket entries, entries following an entry for which check_disk_state() = -ENOENT may not get listed
- BZ - 1764431 - [Ceph-Dashboard] Trash : With no RBD images moved to Trash, Purge Trash execution succeeds
- BZ - 1765517 - [ceph-dashboard] Configuration: Unable to edit the configuration parameter listed as editable
- BZ - 1765530 - [ceph-dashboard] Fix Typo/UI in Dashboard pages
- BZ - 1765536 - [ceph-dashboard] Pools - Performance Details ; grafana icon is available
- BZ - 1767144 - mgr/volumes: support cloning subvolumes from snapshot
- BZ - 1771206 - MDS should not assert on frozen directory during scrubbing
- BZ - 1771208 - Client assert failure when importing caps after trimming
- BZ - 1775218 - mgr/volumes: allow setting uid, gid of subvolume and subvolume group during creation
- BZ - 1775266 - mgr/volumes: extend the `fs volume rm` protection to include MDS tear down
- BZ - 1775404 - osd: add osd_fast_shutdown option
- BZ - 1777064 - [RFE] rgw: support Hashcorp Vault as secrets store for S3 SSE-KMS
- BZ - 1777380 - ceph-mgr may drop metadata sent during session open by daemons
- BZ - 1779186 - [ceph-dashboard] Grafana : Error displayed on refresh or change of time
- BZ - 1782253 - [RFE] Support Deployment with Autoscaler Enabled
- BZ - 1783223 - [ceph-ansible] : switch from rpm to containerized - default osd health check retry needs to be higher
- BZ - 1784011 - ceph-ansible should be able to configure multiple grafana instances when ceph dashboard is deployed
- BZ - 1784405 - [ceph-dashboard] PG scrub: exception 'deque mutated during iteration' hit with scrub Auto Repair enabled.
- BZ - 1784729 - [ceph-dashboard] Dashboard CLI : command SSO Status fails with `python-saml` on RHEL 7.7
- BZ - 1784746 - [ceph-mgr] 'ceph mgr module ls' command does not have the always_on_modules listed in enabled modules
- BZ - 1784895 - RBD block devices remained mapped after concurrent "rbd unmap" failures
- BZ - 1785363 - [ceph-mgr] ceph balancer status taking a long time to return
- BZ - 1785472 - MDS may assert due to creating too many openfiletable objects
- BZ - 1785474 - ceph-fuse client is blacklisted because it fails to respond to MDS cap revoke
- BZ - 1785475 - MDS may crash if no snaprealm is encoded on root inode
- BZ - 1785476 - linkages injected by cephfs-data-scan have first == head
- BZ - 1785477 - MDS reports unrecognized message for mgrclient messages
- BZ - 1785478 - ceph-mgr volumes plugin connection trimming is not functioning
- BZ - 1785580 - [ceph-dashboard][User Management] Pool:: login with pool user lists Manager modules
- BZ - 1785646 - [ceph-dashboard]RGW-User: User details for system is blank always, should reflect the status
- BZ - 1785736 - Purging storage clusters fail " registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.1 msg: '[Errno 2] No such file or directory'"
- BZ - 1786107 - [ceph-ansible] Grafana version v5.4.3, required in downstream compose
- BZ - 1786173 - On a non-existent bucket, a get and put versioning returns HTTP Response 200
- BZ - 1786287 - [ceph-dashboard]UI : RGW: Add capability modal window should not appear when all the capabilities are added
- BZ - 1786457 - [ceph-dashboard]iSCSI: Edit target of more than one parameter causes exception dashboard.rest_client.RequestException: iscsi REST API failed request with status code 400
- BZ - 1786684 - [ceph-ansible][ceph-dashboard]Purge-cluster.yml on cluster with dashboard fails at "remove node-exporter image" with error 'it is being used by 1 containers'
- BZ - 1788347 - reduce min_alloc_size to improve space usage for small objects
- BZ - 1788917 - [ceph-dashboard]Crush map viewer: color on destroyed osd not visible until its highlighted
- BZ - 1789357 - [Ceph-Dashboard] Host -> of selected osd performance details -> no changes in Raw capacity and OSD Disk Performance Statistics
- BZ - 1790472 - [RFE] [ceph-ansible] FS to BS migration - fail playbook if all OSDs in hosts are already bluestore OSDs
- BZ - 1790479 - [RFE] [ceph-ansible] : FS to BS migration - reuse journal partition / journal vm for bluestore OSDs for db/wal
- BZ - 1791174 - [Ceph-dashboard] User Management : User with Custom Role getting 403 redirection error and gets access denied
- BZ - 1792222 - unable to disable the dashboard redirect mechanism
- BZ - 1792225 - prometheus cluster is not configured correctly when deploying multiple instances
- BZ - 1792230 - missing settings to configure the mgr dashboard module binding port/address
- BZ - 1792320 - ceph-handler : unset noup flag attempts to use container not on host
- BZ - 1793542 - ceph-volume lvm batch errors on OSD systems w/HDDs and multiple NVMe devices
- BZ - 1793564 - [ceph-ansible] : rolling_update : norebalance flag is to be unset when playbook completes
- BZ - 1794351 - [RFE]: Rolling_update fails on Dashboard role trying to create radosgw system user on Multisite secondary
- BZ - 1794713 - [ceph-dashboard] read-only user can display RGW API keys
- BZ - 1794715 - [RGW] Slow lc processing resulting in too much backlog
- BZ - 1795406 - Misleading error message during realm pull
- BZ - 1795592 - CVE-2020-1716 ceph-ansible: hard coded credential in ceph-ansible playbook
- BZ - 1796160 - [ceph-ansible] nfs deployment in rhcs4 fails in selinux enforcing mode
- BZ - 1796453 - [ceph-ansible]: ansible-playbook shrink-osd.yml -e osd_to_kill=osd_id fails on upgraded 4.x cluster
- BZ - 1796853 - Need the logic to intelligently enable ceph 4 repos on RHEL 7.7 from CDN
- BZ - 1797161 - [RHCS 4.x] Deprecate radosgw-admin orphan * commands as radoslist will be available from bug 1770955
- BZ - 1797817 - API/ABI break in rgw_lookup
- BZ - 1798153 - Dashboard does not allow you to set norebalance OSD flag
- BZ - 1798718 - openfiletable max omap k/v pairs should be same as osd_deep_scrub_large_omap_object_key_threshold
- BZ - 1798719 - mount.ceph fails with ERANGE if name= option is longer than 37 characters
- BZ - 1798781 - The ceph client role does not support --limit
- BZ - 1802199 - perf regression due to bluefs_buffered_io=true
- BZ - 1805347 - [RFE] ceph upmap balancer module to provide expected number of moves required to get to a specific deviation
- BZ - 1805391 - rgw: fix bug with (un)ordered bucket listing and marker w/ namespace
- BZ - 1805643 - [RFE] purge-container-cluster: support OSDs with 3 digits
- BZ - 1807085 - ceph-ansible 4.0.x does not work with ansible 2.9
- BZ - 1807184 - Slow Requests/OP's types not getting logged in cluster log
- BZ - 1808046 - Fix Grafana dashboard wrong Pool capacity
- BZ - 1808345 - [Ceph-dashboard] Not able to login to dashboard
- BZ - 1808495 - ceph-ansible sets expected_num_objects unexpectedly when creating pools
- BZ - 1809242 - MDS SIGSEGV in Migrator::export_sessions_flushed
- BZ - 1810121 - rgw: cls/queue: fix data corruption in urgent data
- BZ - 1810551 - Update branding to 4.1
- BZ - 1810610 - Multisite not fully syncing all buckets
- BZ - 1810884 - [Ceph-dashboard] Object Gateway : Daemons Overall Performance has no reflection in Average GET/PUT Latencies
- BZ - 1810948 - Pool read/write OPS shows too many decimal places
- BZ - 1811547 - remove steps asking users to copy playbooks from infrastructure-playbooks to /usr/share/ceph-ansible
- BZ - 1813349 - rgw: raise default bucket-index sharding to 11 (OCS, Perf)
- BZ - 1814082 - Disk failure prediction features does not work on RHEL8.1 with smartmontools 6.6
- BZ - 1814380 - notification: topic actions fail with "MethodNotAllowed"
- BZ - 1814542 - purge-docker-cluster.yml doesn't disable stopped osd service
- BZ - 1814806 - [Ceph-dashboard] Error when enabling SSO with certificate file
- BZ - 1814942 - Internal Ganesha with external Ceph fails with /etc/ceph/ceph.manila.keyring not found
- BZ - 1815211 - [RFE] rgw: provide a subcommand under radosgw-admin to list rados objects for a given bucket
- BZ - 1815239 - Incorrect number of entries returned during delimiter based bucket listing
- BZ - 1815261 - 'swift stat' command causes apparent infinite osd_op loop
- BZ - 1815390 - [ceph-dashboard] SSO signout error " 500 Internal server error" is seen when using Python SAML v 1.8.0-1 pkg
- BZ - 1815579 - pg_autoscaler: treat target ratios as weights
- BZ - 1816713 - [Ceph-Ansible] more than 1 bluestore_wal_devices being processed as single device in playbook run
- BZ - 1816989 - [Device Classes] Make "default: true" option valid for new pools even though osd_crush_location is not defined
- BZ - 1817069 - radosgw are not committing dynamic resharding
- BZ - 1817586 - Failing to deploy HCI setup, ceph-ansible failing, osd_pool_default_crush_rule is not defined
- BZ - 1817985 - [RGW] RGW daemon crashes on delete an object and a service restart
- BZ - 1819302 - python3-saml cannot be installed on el8
- BZ - 1819681 - ceph osd migration to systemd fails on undefined variable
- BZ - 1820233 - [ceph-dashboard] SAML pkgs are not getting installed with the latest composes on container setup
- BZ - 1820272 - Add python-saml dependency to ceph.spec
- BZ - 1820560 - [ceph-dashboard] standby mgr redirects to a IP address instead of a FQDN URL
- BZ - 1821784 - rgw: handle "BucketAlreadyExists" error in cloud sync module
- BZ - 1822153 - [ceph-volume] Volume group name has invalid characters ,playbook fails at osd creation task
- BZ - 1822328 - Standalone Ganesha deployment with external Ceph fails as some tasks shouldn't be delegated to MONs
- BZ - 1822482 - [RGW] radosgw-admin crash observed upon lc process command execution
- BZ - 1822599 - ceph-ansible fails adding a new OSD when crush_rule_config is enabled
- BZ - 1822902 - [RGW]: SElinux denials observed on teuthology multisite run
- BZ - 1822905 - [RGW]: multiple new SElinux denials on NFS v3
- BZ - 1823975 - msgr: EventCenter-related fixes
- BZ - 1824263 - [Storage Workload DFG] [RHCS 4.1] Release-based Workload Testing
- BZ - 1825104 - STS assume_role: S3 operations like get and put bucket versioning are denied
- BZ - 1825149 - STS assume_role_web_Identity: Error 403 on doing a AssumeRoleWithWebIdentity call
- BZ - 1825288 - RGW asserts on a delete_bucket operation.
- BZ - 1825827 - [rgw]RGW daemon crash on rgwlc- lifecycle_thr_
- BZ - 1825988 - [ceph-dashboard] sometimes an error occurs when performing actions on RBD images
- BZ - 1826884 - rolling_upgrade.yaml is failing with error "fail on unregistered red hat rhcs linux"
- BZ - 1827299 - [ceph-ansible][ceph-dashboard] ceph containerized deployment in ipv6 format leads to health error and not able to login to dashboard
- BZ - 1827781 - [RGW][nfs-ganesha]: nfs-ganesha daemon crash on listing a bucket over 1M objects
- BZ - 1827785 - [rgw] RGW daemon crash on thread lifecycle_thr_ : RGWSI_Zone::need_to_log_data
- BZ - 1827789 - [rgw]: radosgw-admin lc list command always returns empty list
- BZ - 1827799 - [rgw][nfs-ganesha]: objects deleted via lc are listing after a while
- BZ - 1829804 - [ceph-ansible] nfs deployment fails in selinux enforcing on rhel8.2
- BZ - 1831119 - [BAREMETAL] rook-ceph-mgr pod restarted with assert message
- BZ - 1831285 - OSP 16/RHCS 4.1 external Ceph with internal ganesha fails - /var/lib/ceph/bootstrap-rgw/ceph.keyring must exist.
- BZ - 1831342 - standalone nfs fails with "error while evaluating conditional (not item.get('skipped', False)): 'list object' has no attribute 'get'"
- BZ - 1833063 - OSDs are crashing with segmentation fault in thread_name:tp_osd_tp
- BZ - 1834790 - RGW: Objects in GC queue are not drained out.
CVEs
- CVE-2020-1716
References
- https://access.redhat.com/security/cve/CVE-2020-1716